Artificial intelligence is transforming the world around us, and the healthcare industry is no exception. As a result, healthcare entities are testing and implementing AI programs in an effort to improve patient care.
How are hospitals using AI?
- Medical Imaging: Analysts estimate that roughly 3-5 percent of medical imaging tests result in diagnostic error – up to 40 million tests a year. [i] Hospitals are now turning to AI in hopes of reducing this margin of error. For example, University Hospitals in Cleveland is adopting an AI program to target early detection of lung cancer. [ii] The program serves as a second set of eyes for the radiologist reading chest scans; it does not replace the radiologist’s initial review. The goal is to catch suspicious nodules sooner and, in turn, improve survival rates.
- Infection Diagnosis and Prevention: Despite being treatable when detected early, sepsis is responsible for a third of all hospital deaths. To facilitate early detection, hospitals are testing an AI system that automatically assesses a patient’s sepsis risk every 20 minutes. [iii] While reviews have been mixed, a Duke Institute for Health Innovation study found that its “early warning system could predict sepsis a median time of 5 hours before clinical presentation and, given the high morbidity and mortality of sepsis, had the potential to save 8 lives a month.” [iv]
- Administrative Tasks: On average, medical providers spend 1.77 hours per day drafting patient notes outside of their regular work hours. [v] This documentation burden is significant and, in fact, is the number one cause of physician burnout. [vi] In response, several hospitals are turning to AI. In February 2025, the Cleveland Clinic announced a partnership with Ambience, an AI note-taking platform [vii] that allows providers to record appointments, generate comprehensive notes from those appointments, and develop summarized care plans for patients.
What does this mean for the future of medical claims?
While the potential benefits of AI on healthcare are apparent, there has been little discussion (or case law) on the impact AI will have on medical malpractice liability. Most notably, who is responsible when a patient is injured?
Imagine a patient receives a routine CT scan, which reveals a small lung nodule. Given how minuscule the nodule is, the physician does not flag it for concern – but neither does the hospital’s AI imaging software. Years later, the nodule turns cancerous. The patient brings a lawsuit for medical negligence, alleging that the failure to identify the lung nodule as suspicious caused the nodule to metastasize and delayed his treatment. Who is liable? The medical provider? The AI program? The hospital that hired the doctor and implemented the AI system? All of the above?
Now, imagine the same scenario, but this time, the AI labels the nodule as “potentially suspicious.” Under the circumstances, the risk of metastasis is less than 0.5%. The patient is strongly opposed to treatment because it will cost $50,000 out of pocket and require considerable time off work. Should the doctor automatically recommend treatment because the AI software labeled the nodule as “potentially suspicious”? If the doctor does not recommend treatment, and the nodule turns cancerous, could the doctor be at fault? Alternatively, what if the doctor recommends treatment, but later learns that the AI software improperly labeled the nodule as suspicious and treatment was never necessary? Who gets blamed?
We do not have answers to these questions, and no case law has established a precedent. But moving forward, healthcare entities may want to consider how (if at all) liability could shift as AI becomes integrated into patient care. If your organization is implementing AI technologies or facing questions about liability and medical claims, reach out to our experienced Health & Medicine practice group to discuss how we can support you in navigating the evolving landscape of AI in healthcare.
[i] Improving Accuracy in Radiology Images and Reports, GE HealthCare, https://www.gehealthcare.com/insights/article/improving-accuracy-in-radiology-images-and-reports?srsltid=AfmBOop8JorX8V7d6527WQufFUXjsjnV4hPmglU5seCI6hBq37qy_0pc (June 13, 2023).
[ii] Carly Belsterling, UH Cleveland Medical Center Activates AI for Early Lung Cancer Identification, University Hospitals Newsroom, https://news.uhhospitals.org/news-releases/articles/2025/04/uh-cleveland-medical-center-activates-ai-for-early-lung-cancer-identification (Apr. 2, 2025).
[iii] Derek Smith, Widely Used AI Tool for Early Sepsis Detection May be Cribbing Doctors’ Suspicions, Michigan Engineering News, https://news.engin.umich.edu/2024/02/widely-used-ai-tool-for-early-sepsis-detection-may-be-cribbing-doctors-suspicions/ (Feb. 16, 2024).
[iv] Sepsis Watch: the Implementation of Duke-Specific Early Warning System for Sepsis, Duke Institute for Health Innovation, https://dihi.org/project/sepsiswatch/ (last visited July 1, 2025).
[v] Sarina Schager, Writing Clinical Notes: Have We Made Progress?, 29 Fam Pract Manag. 4, at 3-4 (July 2022).
[vi] Stop Physician Burnout: The Hidden Danger of AI Note Writing Software, Physician’s Weekly, https://www.physiciansweekly.com/stop-physician-burnout-the-hidden-danger-of-ai-note-writing-software/ (Sept. 9, 2024).
[vii] Cleveland Clinic Announces Rollout of Ambience Healthcare’s AI Platform, Newsroom.ClevelandClinic.org, https://newsroom.clevelandclinic.org/2025/02/19/cleveland-clinic-announces-the-rollout-of-ambience-healthcares-ai-platform (Feb. 19, 2025).